Cytokine release syndrome (CRS), also known as cytokine storm, is one of the most consequential adverse effects of chimeric antigen receptor therapies, which have otherwise shown promising results in cancer treatment. When emerging, CRS can be identified by analyzing specific cytokine and chemokine profiles, which tend to exhibit similarities across patients. In this paper, we exploit these similarities using machine learning algorithms and set out to pioneer a meta-review driven approach to CRS identification based on specific cytokine peak concentrations and evidence from previous clinical studies. We argue that such an approach could support clinicians in analyzing suspect cytokine profiles by matching them against CRS knowledge from past clinical studies, with the ultimate aim of swift CRS diagnosis. During evaluation with real-world CRS clinical data, we highlight the potential of our proposed approach to produce interpretable results, in addition to being effective in identifying the onset of cytokine storm.
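As a minimal sketch of the kind of profile matching described above, a suspect cytokine profile can be compared against reference profiles distilled from past studies. The cytokine names, concentrations, and reference values below are hypothetical illustrations, not values from the evaluated clinical data, and the paper's actual method is more elaborate than this nearest-profile matcher:

```python
import math

# Hypothetical reference profiles of peak cytokine concentrations (pg/mL);
# names and values are illustrative only, not drawn from any clinical study.
REFERENCE_PROFILES = {
    "crs": {"IL-6": 1000.0, "IFN-gamma": 500.0, "IL-10": 300.0},
    "no_crs": {"IL-6": 10.0, "IFN-gamma": 20.0, "IL-10": 15.0},
}

def cosine_similarity(a, b):
    """Cosine similarity between two cytokine profiles given as dicts."""
    keys = sorted(set(a) | set(b))
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def classify_profile(patient_profile):
    """Return the label of the most similar reference profile, plus the
    per-label similarity scores for interpretability."""
    scores = {label: cosine_similarity(patient_profile, ref)
              for label, ref in REFERENCE_PROFILES.items()}
    return max(scores, key=scores.get), scores

label, scores = classify_profile({"IL-6": 850.0, "IFN-gamma": 420.0, "IL-10": 250.0})
print(label)  # "crs" — the profile closely matches the CRS reference
```

Returning the full score dictionary, not just the winning label, reflects the interpretability goal: a clinician can see how strongly the profile resembles each reference.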
For language models to aid physics research, they must first encode representations of mathematical and natural language discourse that lead to coherent explanations, with correct ordering and relevance of statements. We present a collection of datasets developed to evaluate the performance of language models in this regard, measuring capabilities with respect to sentence ordering, position, section prediction, and discourse coherence. Analysis of the data reveals the equations and sub-disciplines that are most common in physics discourse, as well as the sentence-level frequency of equations and expressions. We present baselines that demonstrate how coherence-related tasks in physics challenge contemporary language models, even when trained on mathematical natural language objectives.
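Sentence-ordering performance is commonly scored with Kendall's tau between the predicted and gold orderings. A minimal implementation, assuming the gold order is the identity permutation (the specific metric used by the datasets above is an assumption here), might look like:

```python
def kendall_tau(predicted_order, n):
    """Kendall's tau between a predicted sentence ordering and the gold
    ordering 0..n-1: (concordant - discordant pairs) / total pairs."""
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            if predicted_order[i] < predicted_order[j]:
                concordant += 1  # pair kept in gold relative order
            else:
                discordant += 1  # pair inverted relative to gold
    total = n * (n - 1) // 2
    return (concordant - discordant) / total

print(kendall_tau([0, 1, 2, 3], 4))  # 1.0  (perfect ordering)
print(kendall_tau([3, 2, 1, 0], 4))  # -1.0 (fully reversed)
```

A score of 1.0 means the model reproduced the gold order exactly; 0 means the ordering is uncorrelated with it.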
To interpret neural NLI models and their reasoning strategies, we carry out a systematic probing study investigating whether these models capture crucial semantics of natural logic: monotonicity and concept inclusion. Correctly identifying valid inferences in downward-monotone contexts is a known stumbling block for NLI performance, subsuming linguistic phenomena such as negation scope and generalized quantifiers. To understand this difficulty, we emphasize monotonicity as a property of a context, and examine the extent to which models capture monotonicity information in the contextual embeddings that are intermediate to their decision-making process. Drawing on recent advances in the probing paradigm, we compare the presence of monotonicity features across various models. We find that monotonicity information is notably weak in the representations of popular NLI models that achieve high scores on benchmarks, and observe that improvements to these models based on fine-tuning strategies have introduced stronger monotonicity features, together with improved performance on challenge sets.
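A probing study of this kind trains a lightweight classifier on frozen embeddings and reads off its accuracy as evidence that a feature is linearly present. The sketch below uses synthetic "embeddings" with a planted monotonicity signal; the dimensionality, signal strength, and perceptron-style probe are illustrative assumptions, not the paper's actual setup, which probes real NLI model representations:

```python
import random

random.seed(0)

# Toy "contextual embeddings": dimension 2 weakly encodes monotonicity
# direction (+1 upward-monotone, -1 downward-monotone). A real probe would
# use embeddings extracted from an NLI model instead.
def make_example():
    label = random.choice([1, -1])
    emb = [random.gauss(0, 1) for _ in range(8)]
    emb[2] += 1.5 * label  # planted monotonicity signal in one dimension
    return emb, label

data = [make_example() for _ in range(400)]
train, test = data[:300], data[300:]

# Linear probe trained with simple perceptron updates.
w = [0.0] * 8
for _ in range(20):
    for emb, label in train:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, emb)) > 0 else -1
        if pred != label:
            w = [wi + 0.1 * label * xi for wi, xi in zip(w, emb)]

accuracy = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, emb)) > 0 else -1) == label
    for emb, label in test
) / len(test)
print(f"probe accuracy: {accuracy:.2f}")  # well above chance when the feature is present
```

If the planted signal is removed (drop the `emb[2] +=` line), probe accuracy collapses toward chance, which is exactly the contrast a probing study uses to argue a feature is or is not encoded.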
Complex models, such as neural networks (NNs), are composed of many interrelated components. To represent these models, eliciting and characterizing the relations between components is essential. Perhaps because of this, diagrams, as "icons of relation", are a prevalent medium for depicting complex models. Currently, the diagrams used to communicate NN architectures are highly varied. This diversity of diagramming choices provides an opportunity to gain insight into which aspects are prioritized for communication. In this philosophical exploration of NN diagrams, we integrate theories of conceptual models, communication theory, and semiotics.
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos-São Vicente-Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
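The core PINN training loop (minimizing governing-equation residuals at sampled points, with the points resampled every step) can be illustrated on a toy periodic problem. Here the "governing equation" is du/dt = cos t with the periodic ansatz u(t) = a·sin t + b·cos t, a deliberately simple stand-in for the Navier-Stokes system used in the paper:

```python
import math, random

random.seed(1)

# Minimal PINN-style sketch: drive the residual R(t) = a·cos t - b·sin t - cos t
# of the toy equation du/dt = cos t to zero by gradient descent on (a, b).
# Collocation points are freshly resampled every step -- the near-zero-cost
# resampling strategy evaluated in the paper.
a, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    ts = [random.uniform(0, 2 * math.pi) for _ in range(32)]  # resampled points
    grad_a = grad_b = 0.0
    for t in ts:
        r = a * math.cos(t) - b * math.sin(t) - math.cos(t)  # equation residual
        grad_a += 2 * r * math.cos(t) / len(ts)   # d(mean r^2)/da
        grad_b += -2 * r * math.sin(t) / len(ts)  # d(mean r^2)/db
    a -= lr * grad_a
    b -= lr * grad_b

print(f"a = {a:.3f}, b = {b:.3f}")  # converges near a = 1, b = 0, i.e. u(t) = sin t
```

In an actual PINN the ansatz is a neural network and the residual derivatives come from automatic differentiation, but the structure of the loop, including the cheap resampling of evaluation points, is the same.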
We propose JFP, a Joint Future Prediction model that can learn to generate accurate and consistent multi-agent future trajectories. For this task, many different methods have been proposed to capture social interactions in the encoding part of the model; however, considerably less focus has been placed on representing interactions in the decoder and output stages. As a result, the predicted trajectories are not necessarily consistent with each other, and often result in unrealistic trajectory overlaps. In contrast, we propose an end-to-end trainable model that directly learns the interactions between pairs of agents in a structured, graphical model formulation in order to generate consistent future trajectories. It sets new state-of-the-art results on the Waymo Open Motion Dataset (WOMD) for the interactive setting. We also investigate a more complex multi-agent setting for both WOMD and a larger internal dataset, where our approach improves significantly on the trajectory overlap metrics while obtaining on-par or better performance on single-agent trajectory metrics.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
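Patch-based training, the most common answer to oversized samples in the survey, amounts to tiling each image into fixed-size windows, while the 3D-as-2D strategy treats a volume as a stack of independent slices. A minimal sketch of both (pure-Python lists stand in for image arrays; any real pipeline would use an array library):

```python
def extract_patches_2d(image, patch_size, stride):
    """Split a 2D image (list of rows) into square patches -- the
    patch-based training strategy used when samples are too large
    to process at once. Overlap is controlled by `stride`."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append([row[left:left + patch_size]
                            for row in image[top:top + patch_size]])
    return patches

def as_2d_tasks(volume):
    """Treat a 3D volume (list of 2D slices) as a series of 2D tasks."""
    return list(volume)

image = [[r * 8 + c for c in range(8)] for r in range(8)]
patches = extract_patches_2d(image, patch_size=4, stride=4)
print(len(patches))  # 4 non-overlapping 4x4 patches from an 8x8 image
```

At inference time the per-patch (or per-slice) predictions are stitched back together, which is why patch size and stride are tuning decisions rather than free choices.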
Small to medium-scale data science experiments often rely on research software developed ad-hoc by individual scientists or small teams. Often there is no time to make the research software fast, reusable, and open access. The consequence is twofold. First, subsequent researchers must spend significant work hours building upon the proposed hypotheses or experimental framework. In the worst case, others cannot reproduce the experiment and reuse the findings for subsequent research. Second, suppose the ad-hoc research software fails during often long-running, computationally expensive experiments. In that case, the overall effort to iteratively improve the software and rerun the experiments creates significant time pressure on the researchers. We suggest making caching an integral part of the research software development process, even before the first line of code is written. This article outlines caching recommendations for developing research software in data science projects. Our recommendations provide a perspective to circumvent common problems such as proprietary dependencies, speed, etc. At the same time, caching contributes to the reproducibility of experiments in the open science workflow. With respect to the four guiding principles, i.e., Findability, Accessibility, Interoperability, and Reusability (FAIR), we foresee that including the proposed recommendations in research software development will make the data related to that software FAIRer for both machines and humans. We demonstrate the usefulness of some of the proposed recommendations on our recently completed research software project in mathematical information retrieval.
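A caching layer of the kind recommended here can often be built from the standard library alone. The sketch below combines in-memory memoization for repeated calls within one run with a simple on-disk cache so an interrupted long-running experiment can resume; the cache location, key scheme, and `expensive_step` stand-in are illustrative assumptions, not the article's prescriptions:

```python
import functools
import pickle
import tempfile
from pathlib import Path

# In-memory caching for repeated pure-function calls within one run.
@functools.lru_cache(maxsize=None)
def expensive_step(x):
    return x ** 2  # stand-in for a costly computation

# Disk caching so a long-running experiment can resume after a crash.
CACHE_DIR = Path(tempfile.gettempdir()) / "experiment_cache"

def cached(key, compute):
    """Return the cached result for `key`, computing and persisting it
    on the first call. `compute` is a zero-argument callable."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{key}.pkl"
    if path.exists():
        return pickle.loads(path.read_bytes())  # reuse the earlier result
    result = compute()
    path.write_bytes(pickle.dumps(result))      # persist for later reruns
    return result

print(cached("step1", lambda: expensive_step(12)))  # 144, computed at most once
```

Designing the key scheme up front, before the first line of analysis code, is what lets intermediate results survive both crashes and deliberate reruns, which is the thrust of the recommendations above.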
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
In this paper, we study the application of DRL algorithms to local navigation problems, in which a robot moves towards a goal location in unknown and cluttered workspaces equipped only with limited-range exteroceptive sensors, such as LiDAR. Collision-avoidance policies based on DRL offer some advantages, but they are quite susceptible to local minima, since their capacity to learn suitable actions is limited to the sensor range. Because most robots perform tasks in unstructured environments, it is of great interest to seek generalized local navigation policies capable of avoiding local minima, especially in untrained scenarios. To this end, we propose a novel reward function that incorporates map information gained in the training stage, increasing the agent's capacity to deliberate about the best course of action. In addition, we use the SAC algorithm to train our ANN, which proves more effective than other approaches in the state-of-the-art literature. A set of sim-to-sim and sim-to-real experiments shows that our proposed reward combined with SAC outperforms the compared methods with regard to local minima and collision avoidance.
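A shaped reward that folds map knowledge gathered during training into local navigation might look like the following sketch. The terms, weights, and zone representation are illustrative assumptions, not the paper's exact reward formulation:

```python
import math

def navigation_reward(pos, prev_pos, goal, min_lidar, local_minimum_zones,
                      collision_dist=0.2, zone_penalty=0.5):
    """Illustrative shaped reward for local navigation: progress toward
    the goal, a penalty when the closest LiDAR return signals imminent
    collision, and a map-informed penalty for entering regions identified
    during training as local minima (e.g. dead ends)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    reward = dist(prev_pos, goal) - dist(pos, goal)  # positive when approaching goal
    if min_lidar < collision_dist:
        reward -= 10.0                               # near-collision penalty
    for zone_center, zone_radius in local_minimum_zones:
        if dist(pos, zone_center) < zone_radius:
            reward -= zone_penalty                   # discourage known traps
    return reward

# Moving one unit toward the goal, clear of obstacles and known dead ends:
r = navigation_reward(pos=(1.0, 0.0), prev_pos=(2.0, 0.0), goal=(0.0, 0.0),
                      min_lidar=1.5, local_minimum_zones=[((5.0, 5.0), 1.0)])
print(r)  # 1.0: one unit of progress, no penalties
```

The map-informed term is what distinguishes this shaping from purely reactive rewards: the trap regions are not visible within the sensor range at decision time, but were recorded during training.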